feat: the CLOAK privacy solution #1737
base: develop
Conversation
Walkthrough
This PR introduces validium mode support with versioning and decryption key handling across the coordinator, rollup, and prover. It updates message structures, task payloads, and verifier initialization to propagate version information and encryption keys. It modifies C-level function signatures and Go wrappers for libzkp integration, adds version/encryption fields to task details, and extends configuration with validium mode flags.
Changes
Sequence Diagram(s)
sequenceDiagram
participant Coordinator
participant ProverTask
participant LibZKP
participant Sequencer Config
rect rgb(200, 220, 255)
note over Coordinator,Sequencer Config: New Validium Decryption Key Flow
Coordinator->>ProverTask: GenerateUniversalTask(taskType, taskJSON, fork, expectedVk, decryptionKey)
ProverTask->>Sequencer Config: Fetch decryption_key from config
Sequencer Config-->>ProverTask: decryptionKey (hex string)
ProverTask->>ProverTask: hex.Decode(decryptionKey) to []byte
ProverTask->>LibZKP: gen_universal_task(..., decryptionKey, len)
LibZKP->>LibZKP: For Chunk: construct ValidiumInputs if decryptionKey present
LibZKP-->>ProverTask: result
ProverTask-->>Coordinator: task with encrypted/decrypted data
end
sequenceDiagram
participant Coordinator
participant VersionUtils
participant ProverTask
rect rgb(240, 220, 200)
note over Coordinator,ProverTask: Version Resolution Flow (ValidiumMode)
Coordinator->>VersionUtils: Version(hardForkName, ValidiumMode)
alt ValidiumMode == true
VersionUtils->>VersionUtils: domain = 1, stfVersion = 1
else ValidiumMode == false
VersionUtils->>VersionUtils: domain = 0, stfVersion = fork_to_stf(hardForkName)
alt Fork Unknown
VersionUtils-->>Coordinator: error
end
end
VersionUtils->>VersionUtils: combined = (domain << 6) | stfVersion
VersionUtils-->>Coordinator: uint8(combined)
Coordinator->>ProverTask: Store version in TaskDetail
end
sequenceDiagram
participant L2Watcher
participant RpcClient
participant Validium Path
participant Transaction Processing
rect rgb(200, 240, 200)
note over L2Watcher,Transaction Processing: Validium Mode L1 Message Fetching
L2Watcher->>L2Watcher: GetAndStoreBlocks(block)
alt ValidiumMode == true
L2Watcher->>RpcClient: try_fetch_l1_msgs(block_number)
RpcClient->>RpcClient: scroll_getL1MessagesInBlock RPC call
RpcClient-->>L2Watcher: Vec<TxL1Message>
L2Watcher->>L2Watcher: Validate count & queue indices match
L2Watcher->>Transaction Processing: Replace block.Transactions with encrypted L1 messages
L2Watcher->>L2Watcher: log transaction replacement
else ValidiumMode == false
L2Watcher->>Transaction Processing: Use original block.Transactions
end
L2Watcher->>L2Watcher: Store processed block
end
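The version byte used in these flows is a small bit-packing. Below is a self-contained Go sketch of the layout described in the version resolution diagram above (domain in bits 6-7, STF version in bits 0-5); the constant names mirror `coordinator/internal/utils/version.go`, but the helper functions themselves are illustrative only.

```go
package main

import "fmt"

const (
	// Bit layout from the diagram: bits 6-7 carry the domain, bits 0-5 the STF version.
	DomainOffset   = 6
	STFVersionMask = (1 << DomainOffset) - 1
)

// packVersion mirrors combined = (domain << 6) | stfVersion from the flow above.
func packVersion(domain, stfVersion uint8) uint8 {
	return domain<<DomainOffset | stfVersion&STFVersionMask
}

// splitVersion recovers the two fields from a packed version byte.
func splitVersion(v uint8) (domain, stfVersion uint8) {
	return v >> DomainOffset, v & STFVersionMask
}

func main() {
	v := packVersion(1, 1) // validium: domain=1, stfVersion=1
	d, s := splitVersion(v)
	fmt.Printf("packed=0b%08b (%d) domain=%d stf=%d\n", v, v, d, s)
	// prints: packed=0b01000001 (65) domain=1 stf=1
}
```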
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120 minutes
Areas requiring extra attention:
Possibly related PRs
Suggested labels
Suggested reviewers
Poem
Pre-merge checks and finishing touches
❌ Failed checks (1 warning)
✅ Passed checks (1 passed)
✨ Finishing touches
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
| "1f8627277e1c1f6e1cc70c03e6fde06929e5ea27ca5b1d56e23b235dfeda282e22c0e5294bcb1b3a9def836f8d0f18612a9860629b9497292976ca11844b7e73": "https://circuit-release.s3.us-west-2.amazonaws.com/scroll-zkvm/releases/0.5.2/bundle/" | ||
| "1f8627277e1c1f6e1cc70c03e6fde06929e5ea27ca5b1d56e23b235dfeda282e22c0e5294bcb1b3a9def836f8d0f18612a9860629b9497292976ca11844b7e73": "https://circuit-release.s3.us-west-2.amazonaws.com/scroll-zkvm/releases/0.5.2/bundle/", | ||
| "7eb91f1885cc7a63cc848928f043fa56bf747161a74cd933d88c0456b90643346618ea25a7991845bbc5fd571670ee47379ba31ace92d345bca59702a0d4112d": "https://circuit-release.s3.us-west-2.amazonaws.com/scroll-zkvm/releases/0.6.0-rc.1/chunk/", | ||
| "dc653e7416628c612fa4d80b4724002bad4fde3653aef7316b80df0c19740a1bf2be2f4468d1ac4a70e7682babb1c60417e21c7633d4b55b58f44703ec82b05a": "https://circuit-release.s3.us-west-2.amazonaws.com/scroll-zkvm/releases/0.6.0-rc.1/batch/", |
We do not need to set the "preset" section if we have uploaded the corresponding circuit assets via CI
| WithdrawRoot common.Hash `json:"withdraw_root"` | ||
| DataHash common.Hash `json:"data_hash"` | ||
| IsPadding bool `json:"is_padding"` | ||
| // TxBytes []byte `json:"tx_bytes"` |
double check this
Confirmed that this field has been deprecated for quite a long time and there are no references to it. The corresponding definition on the Rust side also has no such field.
Codecov Report
❌ Patch coverage is
Additional details and impacted files
@@ Coverage Diff @@
## develop #1737 +/- ##
===========================================
- Coverage 36.90% 36.89% -0.01%
===========================================
Files 245 245
Lines 20957 20927 -30
===========================================
- Hits 7734 7721 -13
+ Misses 12401 12384 -17
Partials 822 822
Flags with carried forward coverage won't be shown. ☔ View full report in Codecov by Sentry.
|
| let witness = | ||
| let mut witness = | ||
| interpreter.try_fetch_block_witness(block_hash, block_witnesses.last())?; | ||
| if witness.header.number == 1 { |
still needed now?
Not needed anymore. Will clean it up.
| stateRoot := b.StateRoot() | ||
|
|
||
| // Temporary workaround for the wrong genesis state root configuration issue. | ||
| if lastBlock.Header.Number.Uint64() == 0 { |
still needed?
| scroll-zkvm-types = { git = "https://github.com/scroll-tech/zkvm-prover", rev = "a71dd2b" } | ||
| scroll-zkvm-prover = { git = "https://github.com/scroll-tech/zkvm-prover", rev = "546172b" } | ||
| scroll-zkvm-verifier = { git = "https://github.com/scroll-tech/zkvm-prover", rev = "546172b" } | ||
| scroll-zkvm-types = { git = "https://github.com/scroll-tech/zkvm-prover", rev = "546172b" } |
double check this commit is in master, not in a random branch
Updated to a commit in master:
fix: legacy compatibility for batch and bundle info (#213)
|
|
||
| use ( | ||
| ./bridge-history-api | ||
| // ./bridge-history-api |
?
Actionable comments posted: 10
🧹 Nitpick comments (9)
tests/prover-e2e/README.md (4)
3-3: Clarify what "some blocks" refers to.
The phrase "contains data from some blocks in a specified testnet" is vague. Consider being more explicit about which blocks or how they're selected, or reference the staff-set options mentioned in the Prepare section.
-It contains data from some blocks in a specified testnet, and helps to generate a series of chunks/batches/bundles from these blocks, filling the DB for the coordinator, so an e2e test (from chunk to bundle) can be run completely local
+It contains block data from a specified testnet (see Prepare section for available networks), and helps generate chunks/batches/bundles from those blocks. This fills the coordinator DB, enabling end-to-end testing locally (from chunk to bundle).
5-8: Improve formatting consistency in the Prepare section.
The use of `+` for list items is unconventional in markdown. Use standard bullet points or numbered lists for better readability and consistency with the Steps section below.
 Prepare:
-link the staff dir as "conf" from one of the dir with staff set, currently we have following staff sets:
-+ sepolia: with blocks from scroll sepolia
-+ cloak-xen: with blocks from xen sepolia, which is a cloak network
+link the staff dir as "conf" from one of the available staff sets:
+- sepolia: blocks from Scroll Sepolia testnet
+- cloak-xen: blocks from Xen Sepolia testnet (CLOAK network)
11-11: Fix PostgreSQL capitalization.
The correct spelling is "PostgreSQL", not "postgreSql".
-1. run `make all` under `tests/prover-e2e`, it would launch a postgreSql db in local docker container, which is ready to be used by coordinator (include some chunks/batches/bundles waiting to be proven)
+1. run `make all` under `tests/prover-e2e`, it would launch a PostgreSQL db in local docker container, which is ready to be used by coordinator (includes chunks/batches/bundles waiting to be proven)
10-17: Add context about what each step accomplishes.
The steps are comprehensive, but consider adding brief context about the purpose of each group of steps. For example, steps 1–3 prepare assets and infrastructure, steps 4–6 configure services, and step 7 runs the test. This helps users understand the overall flow.
 Steps:
+**Setup local infrastructure and assets:**
 1. run `make all` under `tests/prover-e2e`, it would launch a postgreSql db in local docker container, which is ready to be used by coordinator (include some chunks/batches/bundles waiting to be proven)
 2. download circuit assets with `download-release.sh` script in `zkvm-prover`
 3. generate the verifier stuff corresponding to the downloaded assets by `make gen_verifier_stuff` in `zkvm-prover`
+
+**Configure and launch services:**
 4. setup `config.json` and `genesis.json` for coordinator, copy the generated verifier stuff in step 3 to the directory which coordinator would load them
 5. build and launch `coordinator_api` service locally
 6. setup the `config.json` for zkvm prover to connect with the locally launched coordinator api
+
+**Run end-to-end test:**
 7. in `zkvm-prover`, launch `make test_e2e_run`, which would specific prover run locally, connect to the local coordinator api service according to the `config.json`, and prove all tasks being injected to db in step 1.
tests/prover-e2e/prepare/dump_chunk_records.sql (1)
19-41: Use unquoted literals for numeric fields.The script applies
quote_literal()to all fields, including numeric ones (index, block numbers, gas values, timestamps). This is inefficient and non-idiomatic SQL. Numeric and temporal fields should be unquoted to let PostgreSQL infer the correct type.Consider refactoring to conditionally quote only non-numeric fields. Alternatively, use a more direct approach such as
COPYwith a CSV format, which is more efficient for large migrations. If you prefer to keep the current approach, you can improve readability by grouping numeric and text fields:SELECT 'INSERT INTO chunk (' || 'index, hash, start_block_number, start_block_hash, end_block_number, end_block_hash, ' || 'start_block_time, total_l1_messages_popped_before, total_l1_messages_popped_in_chunk, ' || 'prev_l1_message_queue_hash, post_l1_message_queue_hash, parent_chunk_hash, state_root, ' || 'parent_chunk_state_root, withdraw_root, codec_version, enable_compress, ' || 'total_l2_tx_gas, total_l2_tx_num, total_l1_commit_calldata_size, total_l1_commit_gas, ' || 'created_at, updated_at' || ') VALUES (' || - quote_literal(index) || ', ' || + index || ', ' || quote_literal(hash) || ', ' || - quote_literal(start_block_number) || ', ' || + start_block_number || ', ' || - quote_literal(start_block_hash) || ', ' || + quote_literal(start_block_hash) || ', ' || - quote_literal(end_block_number) || ', ' || + end_block_number || ', ' || quote_literal(end_block_hash) || ', ' || - quote_literal(start_block_time) || ', ' || + EXTRACT(EPOCH FROM start_block_time)::text || ', ' || ... quote_literal(updated_at) || ');'Alternatively, consider whether a
COPYor directINSERT ... SELECTapproach would be simpler and more performant.crates/l2geth/src/rpc_client.rs (2)
256-266: Consider adding assertions to strengthen the test.
The test structure is appropriate for an integration test requiring external configuration. However, it currently only prints results without verifying correctness.
Optional enhancement: Add basic assertions to validate the response structure, such as checking that each message has expected fields or that the Vec is properly deserialized.
172-204: Add a brief clarifying comment for the "synced" parameter.
The hardcoded "synced" parameter is correct per Scroll's L2geth RPC specification—it indicates the node should return fully synchronized L1 messages. However, adding a comment on line 191 would help maintainers understand its purpose:
(block_number_hex, "synced"), // Request finalized/synchronized L1 messages from Scroll node
This makes it explicit that "synced" is a required state selector, not an arbitrary value, and clarifies the method's behavior.
tests/prover-e2e/cloak-xen/.make.env (1)
1-2: LGTM! Test configuration is clear and straightforward.
Consider adding a newline at the end of the file to follow POSIX conventions:
 BEGIN_BLOCK?=35
 END_BLOCK?=49
+
coordinator/internal/utils/version.go (1)
8-11: Consider documenting the bit layout for DomainOffset.
The constants define a version encoding scheme where `DomainOffset=6` places the domain in bits 6-7 and the STF version in bits 0-5. Consider adding a comment explaining this bit layout for maintainability.
Apply this diff to add documentation:
 const (
+	// DomainOffset defines the bit position for the domain field in the version byte.
+	// Version encoding: bits 6-7 = domain (0-3), bits 0-5 = stfVersion (0-63)
 	DomainOffset = 6
 	STFVersionMask = (1 << DomainOffset) - 1
 )
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (4)
- `Cargo.lock` is excluded by `!**/*.lock`
- `go.work` is excluded by `!**/*.work`
- `go.work.sum` is excluded by `!**/*.sum`
- `rollup/go.sum` is excluded by `!**/*.sum`
📒 Files selected for processing (49)
- `Cargo.toml` (1 hunks)
- `common/types/message/message.go` (4 hunks)
- `common/version/version.go` (1 hunks)
- `coordinator/cmd/tool/verify.go` (1 hunks)
- `coordinator/conf/config.json` (1 hunks)
- `coordinator/internal/config/config.go` (2 hunks)
- `coordinator/internal/config/config_test.go` (1 hunks)
- `coordinator/internal/controller/api/controller.go` (1 hunks)
- `coordinator/internal/logic/libzkp/lib.go` (1 hunks)
- `coordinator/internal/logic/libzkp/libzkp.h` (1 hunks)
- `coordinator/internal/logic/libzkp/mock_universal_task.go` (1 hunks)
- `coordinator/internal/logic/libzkp/universal_task.go` (2 hunks)
- `coordinator/internal/logic/provertask/batch_prover_task.go` (3 hunks)
- `coordinator/internal/logic/provertask/bundle_prover_task.go` (1 hunks)
- `coordinator/internal/logic/provertask/chunk_prover_task.go` (1 hunks)
- `coordinator/internal/logic/provertask/prover_task.go` (4 hunks)
- `coordinator/internal/logic/submitproof/proof_receiver.go` (1 hunks)
- `coordinator/internal/logic/verifier/verifier.go` (2 hunks)
- `coordinator/internal/utils/codec_validium.go` (1 hunks)
- `coordinator/internal/utils/version.go` (1 hunks)
- `crates/l2geth/src/rpc_client.rs` (3 hunks)
- `crates/libzkp/Cargo.toml` (1 hunks)
- `crates/libzkp/src/lib.rs` (2 hunks)
- `crates/libzkp/src/proofs.rs` (3 hunks)
- `crates/libzkp/src/tasks.rs` (3 hunks)
- `crates/libzkp/src/tasks/batch.rs` (5 hunks)
- `crates/libzkp/src/tasks/bundle.rs` (3 hunks)
- `crates/libzkp/src/tasks/chunk.rs` (4 hunks)
- `crates/libzkp/src/tasks/chunk_interpreter.rs` (2 hunks)
- `crates/libzkp/src/verifier.rs` (2 hunks)
- `crates/libzkp/src/verifier/universal.rs` (2 hunks)
- `crates/libzkp_c/src/lib.rs` (1 hunks)
- `crates/prover-bin/assets_url_preset.json` (1 hunks)
- `rollup/cmd/rollup_relayer/app/app.go` (1 hunks)
- `rollup/go.mod` (2 hunks)
- `rollup/internal/controller/relayer/l2_relayer.go` (1 hunks)
- `rollup/internal/controller/sender/estimategas.go` (1 hunks)
- `rollup/internal/controller/watcher/l2_watcher.go` (4 hunks)
- `rollup/internal/utils/utils.go` (3 hunks)
- `rollup/tests/integration_tool/imports.go` (5 hunks)
- `rollup/tests/integration_tool/main.go` (1 hunks)
- `tests/prover-e2e/.gitignore` (1 hunks)
- `tests/prover-e2e/Makefile` (2 hunks)
- `tests/prover-e2e/README.md` (1 hunks)
- `tests/prover-e2e/cloak-xen/.make.env` (1 hunks)
- `tests/prover-e2e/cloak-xen/config.json` (1 hunks)
- `tests/prover-e2e/prepare/dump_block_records.sql` (1 hunks)
- `tests/prover-e2e/prepare/dump_chunk_records.sql` (1 hunks)
- `tests/prover-e2e/seoplia/config.json` (1 hunks)
🧰 Additional context used
🧠 Learnings (5)
📚 Learning: 2025-10-18T06:49:24.796Z
Learnt from: Thegaram
Repo: scroll-tech/scroll PR: 1746
File: rollup/internal/controller/sender/sender.go:637-640
Timestamp: 2025-10-18T06:49:24.796Z
Learning: In the file `rollup/internal/controller/sender/sender.go`, the resubmission logic in `createReplacingTransaction` does not convert V0 blob sidecars to V1 when resubmitting transactions after the Fusaka upgrade. This is an accepted edge case because it's unlikely to occur, geth is expected to handle it gracefully, and manual recovery is available if needed.
Applied to files:
rollup/internal/utils/utils.go
rollup/internal/controller/relayer/l2_relayer.go
📚 Learning: 2024-10-20T16:13:20.397Z
Learnt from: colinlyguo
Repo: scroll-tech/scroll PR: 1530
File: rollup/internal/controller/watcher/batch_proposer.go:291-294
Timestamp: 2024-10-20T16:13:20.397Z
Learning: In `batch_proposer.go`, it's acceptable to call `utils.CalculateBatchMetrics` multiple times within the loop because the batch's chunks are increasing in the loop, and each calculation reflects the updated batch state.
Applied to files:
rollup/internal/utils/utils.go
📚 Learning: 2025-07-29T16:38:24.647Z
Learnt from: colinlyguo
Repo: scroll-tech/scroll PR: 1714
File: rollup/internal/controller/relayer/l2_relayer.go:1548-1555
Timestamp: 2025-07-29T16:38:24.647Z
Learning: In rollup/internal/controller/relayer/l2_relayer.go, the validateBatchFields function should error out when GetBatchByHash fails to find a parent batch. This is intentional behavior - missing parent batches represent genuine error conditions that should halt batch submission processing. Genesis batch handling occurs separately from normal batch validation flow.
Applied to files:
rollup/internal/utils/utils.go
rollup/internal/controller/relayer/l2_relayer.go
rollup/cmd/rollup_relayer/app/app.go
rollup/tests/integration_tool/imports.go
📚 Learning: 2025-09-16T02:09:51.657Z
Learnt from: noel2004
Repo: scroll-tech/scroll PR: 1736
File: crates/libzkp/src/verifier/universal.rs:35-45
Timestamp: 2025-09-16T02:09:51.657Z
Learning: The verify_stark_proof method in scroll-zkvm-verifier returns Result<(), Error> indicating success/failure, not Result<bool, Error>. When it succeeds, verification passed; when it fails, it returns an error.
Applied to files:
crates/libzkp/src/verifier/universal.rs
crates/libzkp/src/tasks.rs
coordinator/internal/logic/submitproof/proof_receiver.go
📚 Learning: 2025-04-15T08:52:44.176Z
Learnt from: colinlyguo
Repo: scroll-tech/scroll PR: 1645
File: rollup/proposer-tool-config.json:34-40
Timestamp: 2025-04-15T08:52:44.176Z
Learning: In configuration files like `rollup/proposer-tool-config.json`, placeholders such as `<mainnet read db config>` are intentionally left as-is to be replaced by users with their own configuration values when deploying the tool.
Applied to files:
tests/prover-e2e/seoplia/config.json
🧬 Code graph analysis (24)
coordinator/internal/logic/provertask/bundle_prover_task.go (2)
common/types/message/message.go (1)
BundleTaskDetail(109-115)common/version/version.go (1)
Version(31-31)
crates/libzkp/src/verifier.rs (1)
crates/libzkp/src/verifier/universal.rs (1)
new(19-26)
coordinator/internal/controller/api/controller.go (4)
coordinator/internal/config/config.go (2)
L2(37-42)ProverManager(13-29)coordinator/internal/logic/verifier/verifier.go (1)
NewVerifier(77-105)coordinator/internal/logic/verifier/mock.go (1)
NewVerifier(12-20)coordinator/internal/logic/verifier/types.go (1)
Verifier(11-17)
rollup/tests/integration_tool/main.go (1)
database/config.go (1)
DBConfig(12-19)
crates/libzkp/src/lib.rs (2)
crates/libzkp/src/tasks/chunk.rs (1)
try_from_with_interpret(29-90)crates/libzkp/src/tasks/chunk_interpreter.rs (1)
try_from_with_interpret(27-31)
crates/l2geth/src/rpc_client.rs (1)
crates/libzkp/src/tasks/chunk_interpreter.rs (1)
try_fetch_l1_msgs(21-23)
rollup/internal/controller/relayer/l2_relayer.go (2)
coordinator/internal/utils/codec_validium.go (1)
CodecVersion(10-10)rollup/internal/orm/batch.go (2)
Batch(25-75)Batch(83-85)
coordinator/cmd/tool/verify.go (4)
coordinator/internal/logic/verifier/verifier.go (1)
NewVerifier(77-105)coordinator/internal/logic/verifier/mock.go (1)
NewVerifier(12-20)coordinator/internal/config/config.go (2)
ProverManager(13-29)L2(37-42)coordinator/internal/logic/verifier/types.go (1)
Verifier(11-17)
coordinator/internal/logic/verifier/verifier.go (4)
coordinator/internal/utils/version.go (1)
Version(16-38)coordinator/internal/config/config.go (2)
AssetConfig(66-72)VerifierConfig(75-79)coordinator/internal/logic/verifier/mock.go (1)
NewVerifier(12-20)coordinator/internal/logic/verifier/types.go (1)
Verifier(11-17)
crates/libzkp_c/src/lib.rs (3)
crates/libzkp_c/src/utils.rs (1)
c_char_to_str(3-6)crates/l2geth/src/lib.rs (1)
get_client(14-19)crates/libzkp/src/lib.rs (1)
checkout_chunk_task(40-49)
crates/libzkp/src/verifier/universal.rs (2)
coordinator/internal/utils/version.go (1)
Version(16-38)crates/libzkp/src/proofs.rs (1)
pi_hash_check(185-203)
coordinator/internal/logic/provertask/chunk_prover_task.go (1)
common/types/message/message.go (1)
ChunkTaskDetail(41-48)
coordinator/internal/logic/provertask/prover_task.go (3)
coordinator/internal/utils/version.go (1)
Version(16-38)coordinator/internal/config/config.go (2)
L2(37-42)Sequencer(52-54)coordinator/internal/logic/libzkp/lib.go (1)
GenerateUniversalTask(96-98)
coordinator/internal/utils/codec_validium.go (1)
coordinator/internal/utils/version.go (2)
Version(16-38)STFVersionMask(10-10)
coordinator/internal/logic/submitproof/proof_receiver.go (1)
common/types/message/message.go (1)
ChunkInfo(132-149)
rollup/cmd/rollup_relayer/app/app.go (3)
rollup/internal/controller/watcher/l2_watcher.go (1)
NewL2WatcherClient(45-63)rollup/internal/config/l2.go (1)
L2Config(10-29)rollup/internal/config/relayer.go (1)
RelayerConfig(63-88)
crates/libzkp/src/tasks/chunk_interpreter.rs (2)
crates/l2geth/src/rpc_client.rs (2)
try_fetch_storage_node(152-170)try_fetch_l1_msgs(172-204)crates/libzkp/src/tasks/chunk.rs (1)
try_from_with_interpret(29-90)
crates/libzkp/src/tasks/batch.rs (3)
coordinator/internal/utils/version.go (1)
Version(16-38)crates/libzkp/src/tasks/batch/utils.rs (4)
to_blob(27-41)blob_to_kzg_commitment(44-48)get_versioned_hash(54-58)get_kzg_proof(66-74)crates/libzkp/src/tasks.rs (1)
check_aggregation_proofs(24-41)
coordinator/internal/config/config.go (1)
coordinator/internal/utils/version.go (1)
Version(16-38)
crates/libzkp/src/tasks/bundle.rs (4)
crates/libzkp/src/tasks/chunk.rs (1)
precheck_and_build_metadata(211-216)crates/libzkp/src/tasks/batch.rs (1)
precheck_and_build_metadata(252-261)crates/libzkp/src/proofs.rs (2)
from(173-178)metadata(188-196)crates/libzkp/src/tasks.rs (1)
check_aggregation_proofs(24-41)
coordinator/internal/logic/provertask/batch_prover_task.go (4)
common/types/message/message.go (3)
ChunkInfo(132-149)BatchTaskDetail(94-106)Byte48(51-53)coordinator/internal/types/get_task.go (1)
GetTaskSchema(12-19)coordinator/internal/utils/version.go (1)
Version(16-38)coordinator/internal/utils/codec_validium.go (2)
CodecVersion(10-10)FromVersion(39-41)
crates/libzkp/src/tasks/chunk.rs (3)
common/types/message/message.go (1)
ChunkInfo(132-149)crates/libzkp/src/tasks/chunk_interpreter.rs (1)
try_from_with_interpret(27-31)crates/libzkp/src/verifier/universal.rs (1)
new(19-26)
crates/libzkp/src/proofs.rs (1)
coordinator/internal/utils/version.go (1)
Version(16-38)
rollup/tests/integration_tool/imports.go (2)
common/types/db.go (1)
ProvingTaskProvedDEPRECATED(150-150)rollup/internal/utils/utils.go (1)
BatchMetrics(59-70)
🪛 Checkov (3.2.334)
coordinator/conf/config.json
[medium] 21-22: Basic Auth Credentials
(CKV_SECRET_4)
tests/prover-e2e/seoplia/config.json
[medium] 4-5: Basic Auth Credentials
(CKV_SECRET_4)
🪛 GitHub Actions: Docker
rollup/internal/controller/watcher/l2_watcher.go
[error] 120-120: Method 'GetL1MessagesInBlock' is undefined for type '*L2WatcherClient'. This indicates missing implementation or dependency issue in the code.
[error] 120-120: Undefined symbol 'eth.QueryModeSynced'. Missing import, dependency, or definition in the codebase.
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (7)
- GitHub Check: tests
- GitHub Check: tests
- GitHub Check: check
- GitHub Check: tests
- GitHub Check: check
- GitHub Check: tests
- GitHub Check: check
🔇 Additional comments (33)
common/version/version.go (1)
8-8: Version tag updated correctly.The version bump from v4.6.1 to v4.6.3 aligns with the deployment tag versioning requirement in the PR objectives. The change is self-contained and does not affect the commit extraction or version string construction logic.
crates/prover-bin/assets_url_preset.json (1)
6-8: Verify this addition aligns with prior guidance on preset management.A previous reviewer (noel2004) noted that the "preset" section should not be set if circuit assets are uploaded via CI. This PR now adds three new preset entries for the 0.6.0-rc.1 release. Please confirm:
- Whether the 0.6.0-rc.1 assets are being uploaded via CI (and if so, why they need preset entries)
- Whether there was a deliberate strategy change from the prior guidance
- Whether this is essential for the CLOAK privacy solution feature, or if it should be deferred to a later CI/asset-upload refactor
rollup/internal/controller/sender/estimategas.go (1)
53-55: LGTM! Log refinement improves consistency.Removing the "fallback gas limit" field clarifies the error log, as no fallback logic exists in this function. The remaining fields (from, nonce, to address, error) provide sufficient debugging context, and this change aligns the logging pattern with
estimateBlobGas(lines 86-87).Cargo.toml (1)
20-22: Commit 546172b is valid and the dependency update is appropriate.Verification confirms:
- Commit 546172b exists and is tagged as v0.6.0-rc.6 in the zkvm-prover repository
- All three packages (prover, verifier, types) are workspace members with synchronized versioning (all at v0.6.0)
- Updating them together to the same commit is correct and expected behavior for workspace crates
- Changes between versions include intentional feature additions (L3 Validium support) with proper legacy compatibility fixes
No breaking changes or compatibility issues detected. The update is ready.
crates/libzkp/Cargo.toml (1)
8-8: Manually verify the "scroll" feature is available in scroll-zkvm-types.The specified commit 546172b could not be located in the repository, preventing automated verification. While the feature addition aligns with the pattern in adjacent dependencies (
sbv-primitivesandsbv-coreboth use "scroll"), you should:
- Confirm the "scroll" feature exists in
scroll-zkvm-typesat your target commit- Verify no feature conflicts or incompatibilities are introduced
- Ensure the feature is actually utilized by libzkp code or required for future changes
tests/prover-e2e/prepare/dump_chunk_records.sql (1)
10-44: No changes needed—review comment premise appears incorrect.The search results show that
EncryptionKeyexists on message and proof metadata types in Go code, but thechunktable schema contains noencryption_keycolumn across all 28 migrations. The filedump_chunk_records.sqlcorrectly exports all 24 persisted columns from thechunktable. IfEncryptionKeyis part of the Go type system but not persisted to the database, this SQL script requires no changes.Verify: Is
EncryptionKeyintended to be persisted in the database schema, or is it transient/derived at runtime?crates/l2geth/src/rpc_client.rs (1)
8-8: LGTM!The import addition is necessary for the new L1 message fetching functionality and is correctly placed.
tests/prover-e2e/cloak-xen/config.json (1)
7-8: LGTM!The addition of
validium_modeconfiguration aligns with the validium feature rollout across the codebase.coordinator/internal/config/config_test.go (1)
38-48: LGTM!Test configuration properly reflects the new config structure with
validium_modeandsequencerfields, ensuring test coverage for the extended schema.coordinator/internal/logic/libzkp/mock_universal_task.go (1)
17-17: LGTM!The updated function signature correctly adds the
decryptionKeyparameter to match the real implementation. It's expected that mock implementations may not use all parameters.rollup/tests/integration_tool/main.go (1)
89-90: LGTM!The addition of the
ValidiumModefield properly extends the config structure to support validium mode testing.coordinator/internal/logic/provertask/bundle_prover_task.go (1)
276-286: LGTM!The version retrieval and propagation to
BundleTaskDetailis properly implemented with appropriate error handling, consistent with the broader versioning feature across task types.tests/prover-e2e/seoplia/config.json (1)
1-9: LGTM!Test configuration is properly structured with database config and validium mode flag. The hardcoded credentials flagged by static analysis are acceptable for local development test configurations.
crates/libzkp/src/verifier.rs (1)
44-76: Version plumbing looks correct.Passing
cfg.versionintoVerifier::newkeeps initialization deterministic and the updated log line will make future fork/version bumps much easier to audit. Nicely done.coordinator/internal/logic/libzkp/libzkp.h (1)
39-46: Header signature update aligns with the new key flow.Extending the ABI with
decryption_keyand its length keeps the C side in lockstep with the Go bridge and prevents silent struct corruption. Looks good.rollup/cmd/rollup_relayer/app/app.go (1)
117-117: Validium flag is wired through correctly.Threading
cfg.L2Config.RelayerConfig.ValidiumModeinto the watcher keeps the relayer and watcher behavior consistent after the constructor change.coordinator/internal/config/config.go (1)
39-69: Configuration schema captures the new knobs cleanly.Adding
validium_mode, the sequencer key holder, and per-assetversionkeeps the config self-descriptive for both rollup and validium deployments. Thanks for keeping the JSON tags aligned.coordinator/internal/logic/provertask/chunk_prover_task.go (1)
240-253: Chunk task payload now carries the right metadata.Deriving the version once and populating both
VersionandPostMsgQueueHashkeeps the task detail consistent with the updated message schema. Thanks for handling the error path explicitly.rollup/internal/utils/utils.go (1)
142-205: The genesis guardrail and version gate make sense.Deriving
stateRootlocally lets you patch the mis-configured genesis cases without mutating the batch, and restricting the version byte to the known codec upgrades ensures we fail fast if a new codec sneaks in unnoticed. The warning log will also help spot surprises during bring-up.crates/libzkp/src/lib.rs (2)
21-21: LGTM! The VALIDIUM_VERSION constant aligns with the version encoding scheme.The constant value
0b01000001(65) correctly encodes domain=1 and stfVersion=1, matching the validium mode logic incoordinator/internal/utils/version.gowhere(domain << 6) + stfVersion = (1 << 6) + 1 = 65.
40-49: LGTM! The decryption_key parameter is correctly propagated.The function signature change to include
decryption_key: Option<&[u8]>is properly wired through to the universal task generation path viaChunkProvingTask::try_from_with_interpret.crates/libzkp/src/verifier/universal.rs (2)
19-26: LGTM! Version-based initialization is correctly implemented.The constructor properly accepts a
u8version number and converts it to theVersiontype usingVersion::from(ver_n). This aligns with the broader version-based refactoring across the codebase.
30-56: LGTM! Version-based verification is correctly implemented.All
pi_hash_checkcalls have been consistently updated fromself.forktoself.version, maintaining the verification logic while supporting the version-based flow. The verify method correctly handles theResult<(), Error>return type fromverify_stark_proof.Based on learnings
coordinator/internal/logic/libzkp/lib.go (1)
96-98: LGTM! The decryptionKey parameter is correctly added and propagated.The function signature update adds the
decryptionKey []byteparameter and correctly forwards it to the underlyinggenerateUniversalTaskimplementation, maintaining consistency with the decryption key flow across the codebase.crates/libzkp/src/proofs.rs (2)
185-203: LGTM! Version-based pi_hash_check is correctly implemented.The method signature has been properly updated from
ForkNametoversion::Version, and the internal logic correctly usespi_hash_by_version(ver)instead ofpi_hash_by_fork. This maintains the public input hash verification while supporting the new version-based architecture.
256-256: LGTM! Test data updated for new BundleInfo structure.The addition of
encryption_key: Nonecorrectly aligns with the updatedBundleInfostructure that now includes encryption key handling for validium mode support.coordinator/internal/utils/version.go (2)
13-38: I need to gather more context to verify the remaining concerns. Let me search for additional information:Now let me search the codebase to understand how the
ValidiumModeis used and whether the hardcoding is intentional:Remove the incorrect claim about unused STFVersionMask and clarify the design questions requiring team input.
The original concern contains a factual error:
STFVersionMaskis NOT unused. It's actively exported and used incodec_validium.go:40in theFromVersion()function to extract the STF version from a packed version byte. That function already provides the extraction logic suggested by your comment, making a separateExtractSTFVersion()helper redundant.Regarding the hardcoded validium mode implementation (line 22 setting
stfVersion = 1and the TODO on line 15), these are design decisions that require clarification from the team rather than clear bugs:
- Clarify whether the
stfVersion = 1for validium mode is intentional and whether it should derive fromhardForkName- Document what specific scenarios the TODO comment refers to
Likely an incorrect or invalid review comment.
25-34: Verify hardForkName handling is sufficient.The function only supports three hardForkNames: "euclidv1" (6), "euclidv2" (7), and "feynman" (8). Ensure that:
- These are the only fork names currently returned by
encoding.GetHardforkName()- The stfVersion values align with the corresponding fork implementations
- Future forks will be added to this switch statement
Note: The code includes a TODO comment ("This is not foolproof and does not cover all scenarios"), indicating known incompleteness. The
encodingpackage definition is not accessible in the repository, so manual verification of the contract betweenVersion()andencoding.GetHardforkName()is required to confirm no additional fork names can be returned.coordinator/internal/logic/provertask/batch_prover_task.go (1)
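For reference, a hedged sketch of the fork-name mapping this comment describes (euclidv1 → 6, euclidv2 → 7, feynman → 8); the function shape is an assumption, not the actual Version() implementation.

```go
package main

import "fmt"

// stfVersionForFork reconstructs the mapping listed in the comment above.
// Illustrative only; the real coordinator/internal/utils/version.go may differ.
func stfVersionForFork(hardForkName string) (uint8, error) {
	switch hardForkName {
	case "euclidv1":
		return 6, nil
	case "euclidv2":
		return 7, nil
	case "feynman":
		return 8, nil
	default:
		return 0, fmt.Errorf("unknown hard fork name: %q", hardForkName)
	}
}

func main() {
	for _, fork := range []string{"euclidv1", "euclidv2", "feynman", "unknown-fork"} {
		v, err := stfVersionForFork(fork)
		fmt.Println(fork, v, err)
	}
}
```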
268-299: Encryption key propagation looks correct.Thanks for carrying the per-chunk encryption key through the task payload; this keeps
ChunkInfoaligned with the updated message schema and removes the need for downstream hacks.crates/libzkp/src/tasks.rs (1)
24-36: Version-based aggregation guard looks good.Switching the aggregation proof check to use
Versionkeeps the validation logic consistent with the new public-input plumbing. 👍rollup/tests/integration_tool/imports.go (1)
127-192: Nice pre-seeding of the L1 queue state.Inserting a leading chunk when L1 messages appear mid-range should keep the message-queue hashes continuous for the generated fixtures—thanks for tackling that edge case.
crates/libzkp/src/tasks/chunk.rs (1)
67-90: Validium inputs plumbing looks solid.Fetching the per-block validium payloads and packaging them with the decryption key keeps the witness builder self-contained—nice work.
crates/libzkp/src/tasks/batch.rs (1)
198-206: Validium invariants enforcedThe explicit asserts for “no blob DA in validium” keep the witness path honest and will flush any misconfigured coordinator quickly—nice guardrails.
| type RawBytes []byte | ||
|
|
||
| func (r RawBytes) MarshalJSON() ([]byte, error) { | ||
| if r == nil { | ||
| return []byte("null"), nil | ||
| } | ||
| // Marshal the []byte as a JSON array of numbers | ||
| rn := make([]uint16, len(r)) | ||
| for i := range r { | ||
| rn[i] = uint16(r[i]) | ||
| } | ||
| return json.Marshal(rn) | ||
| } |
Add RawBytes UnmarshalJSON to keep protocols working
Switching RawBytes to emit [0,1,...] arrays breaks every consumer that unmarshals these messages in Go: json.Unmarshal for a []byte-based type still expects a base64 string and now fails on the array. We need a matching UnmarshalJSON that accepts the new numeric representation (and ideally the legacy base64 string) so coordinator ↔ prover payloads keep round-tripping.
type RawBytes []byte
func (r RawBytes) MarshalJSON() ([]byte, error) {
if r == nil {
return []byte("null"), nil
}
// Marshal the []byte as a JSON array of numbers
rn := make([]uint16, len(r))
for i := range r {
rn[i] = uint16(r[i])
}
return json.Marshal(rn)
}
+
+func (r *RawBytes) UnmarshalJSON(data []byte) error {
+ var ints []uint16
+ if err := json.Unmarshal(data, &ints); err == nil {
+ buf := make([]byte, len(ints))
+ for i, v := range ints {
+ if v > 0xFF {
+ return fmt.Errorf("byte value %d out of range", v)
+ }
+ buf[i] = byte(v)
+ }
+ *r = RawBytes(buf)
+ return nil
+ }
+
+ if string(data) == "null" {
+ *r = nil
+ return nil
+ }
+
+ var encoded string
+ if err := json.Unmarshal(data, &encoded); err != nil {
+ return err
+ }
+ raw, err := base64.StdEncoding.DecodeString(encoded)
+ if err != nil {
+ return err
+ }
+ *r = RawBytes(raw)
+ return nil
+}Don’t forget to add encoding/base64 to the imports.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| type RawBytes []byte | |
| func (r RawBytes) MarshalJSON() ([]byte, error) { | |
| if r == nil { | |
| return []byte("null"), nil | |
| } | |
| // Marshal the []byte as a JSON array of numbers | |
| rn := make([]uint16, len(r)) | |
| for i := range r { | |
| rn[i] = uint16(r[i]) | |
| } | |
| return json.Marshal(rn) | |
| } | |
| type RawBytes []byte | |
| func (r RawBytes) MarshalJSON() ([]byte, error) { | |
| if r == nil { | |
| return []byte("null"), nil | |
| } | |
| // Marshal the []byte as a JSON array of numbers | |
| rn := make([]uint16, len(r)) | |
| for i := range r { | |
| rn[i] = uint16(r[i]) | |
| } | |
| return json.Marshal(rn) | |
| } | |
| func (r *RawBytes) UnmarshalJSON(data []byte) error { | |
| var ints []uint16 | |
| if err := json.Unmarshal(data, &ints); err == nil { | |
| buf := make([]byte, len(ints)) | |
| for i, v := range ints { | |
| if v > 0xFF { | |
| return fmt.Errorf("byte value %d out of range", v) | |
| } | |
| buf[i] = byte(v) | |
| } | |
| *r = RawBytes(buf) | |
| return nil | |
| } | |
| if string(data) == "null" { | |
| *r = nil | |
| return nil | |
| } | |
| var encoded string | |
| if err := json.Unmarshal(data, &encoded); err != nil { | |
| return err | |
| } | |
| raw, err := base64.StdEncoding.DecodeString(encoded) | |
| if err != nil { | |
| return err | |
| } | |
| *r = RawBytes(raw) | |
| return nil | |
| } |
🤖 Prompt for AI Agents
In common/types/message/message.go around lines 117 to 129, add a corresponding
RawBytes.UnmarshalJSON implementation so JSON decoding accepts both the new
numeric-array form and the legacy base64 string; implement UnmarshalJSON to
detect whether the incoming JSON is an array of numbers (convert each number to
a byte) or a string (decode with encoding/base64), return an error for invalid
types, and add "encoding/base64" to the imports.
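A small round-trip sketch using the methods proposed above; the UnmarshalJSON here is trimmed to the numeric-array case for brevity, so treat it as illustrative rather than the final implementation.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// RawBytes mirrors the type from the PR; the methods follow the suggestion
// above (legacy base64 handling omitted for brevity).
type RawBytes []byte

func (r RawBytes) MarshalJSON() ([]byte, error) {
	if r == nil {
		return []byte("null"), nil
	}
	rn := make([]uint16, len(r))
	for i := range r {
		rn[i] = uint16(r[i])
	}
	return json.Marshal(rn)
}

func (r *RawBytes) UnmarshalJSON(data []byte) error {
	var ints []uint16
	if err := json.Unmarshal(data, &ints); err != nil {
		return err
	}
	buf := make([]byte, len(ints))
	for i, v := range ints {
		if v > 0xFF {
			return fmt.Errorf("byte value %d out of range", v)
		}
		buf[i] = byte(v)
	}
	*r = RawBytes(buf)
	return nil
}

func main() {
	in := RawBytes{0x01, 0xff, 0x10}

	data, _ := json.Marshal(in)
	fmt.Println(string(data)) // [1,255,16]

	var out RawBytes
	if err := json.Unmarshal(data, &out); err != nil {
		panic(err)
	}
	fmt.Printf("%x\n", []byte(out)) // 01ff10
}
```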
| vf, err := verifier.NewVerifier(cfg.ProverManager.Verifier, validiumMode) | ||
| if err != nil { |
Fix mock verifier signature to accept validium flag
When you build with the mock_verifier tag, coordinator/internal/logic/verifier/mock.go still exposes NewVerifier(cfg *config.VerifierConfig) (single argument). This call now supplies the new validiumMode flag, so that build breaks with “too many arguments in call to verifier.NewVerifier”. Please update the mock implementation to accept (and ignore) the boolean so both build variants continue to compile.
Apply this diff in coordinator/internal/logic/verifier/mock.go:
-func NewVerifier(cfg *config.VerifierConfig) (*Verifier, error) {
+func NewVerifier(cfg *config.VerifierConfig, _ bool) (*Verifier, error) {📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| vf, err := verifier.NewVerifier(cfg.ProverManager.Verifier, validiumMode) | |
| if err != nil { | |
| func NewVerifier(cfg *config.VerifierConfig, _ bool) (*Verifier, error) { |
🤖 Prompt for AI Agents
In coordinator/internal/logic/verifier/mock.go (update to match call site used
in coordinator/internal/controller/api/controller.go lines 29-30), change the
mock NewVerifier signature to accept the new second parameter (cfg
*config.VerifierConfig, validium bool), ignore the boolean inside the function,
and return the existing mock verifier as before; also import or reference the
config package if not already imported so the signature compiles.
| decryptionKey, err := hex.DecodeString(b.cfg.Sequencer.DecryptionKey) | ||
| if err != nil { | ||
| return nil, nil, fmt.Errorf("sequencer decryption key hex-decoding failed") | ||
| } | ||
|
|
||
| ok, uTaskData, metadata, _ := libzkp.GenerateUniversalTask(schema.TaskType, schema.TaskData, schema.HardForkName, expectedVk, decryptionKey) | ||
| if !ok { |
Validate sequencer decryption key length before FFI
hex.DecodeString will happily return any byte length. If someone configures a non-64-hex-character key, you’ll forward a shorter slice to libzkp, and the Rust side currently asserts len == 32, which will panic inside FFI and abort the coordinator. Please reject unexpected lengths here (allow 0 or 32 bytes) before calling into libzkp.
decryptionKey, err := hex.DecodeString(b.cfg.Sequencer.DecryptionKey)
if err != nil {
return nil, nil, fmt.Errorf("sequencer decryption key hex-decoding failed")
}
+ if l := len(decryptionKey); l > 0 && l != 32 {
+ return nil, nil, fmt.Errorf("sequencer decryption key must be 32 bytes, got %d", l)
+ }
ok, uTaskData, metadata, _ := libzkp.GenerateUniversalTask(schema.TaskType, schema.TaskData, schema.HardForkName, expectedVk, decryptionKey)📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| decryptionKey, err := hex.DecodeString(b.cfg.Sequencer.DecryptionKey) | |
| if err != nil { | |
| return nil, nil, fmt.Errorf("sequencer decryption key hex-decoding failed") | |
| } | |
| ok, uTaskData, metadata, _ := libzkp.GenerateUniversalTask(schema.TaskType, schema.TaskData, schema.HardForkName, expectedVk, decryptionKey) | |
| if !ok { | |
| decryptionKey, err := hex.DecodeString(b.cfg.Sequencer.DecryptionKey) | |
| if err != nil { | |
| return nil, nil, fmt.Errorf("sequencer decryption key hex-decoding failed") | |
| } | |
| if l := len(decryptionKey); l > 0 && l != 32 { | |
| return nil, nil, fmt.Errorf("sequencer decryption key must be 32 bytes, got %d", l) | |
| } | |
| ok, uTaskData, metadata, _ := libzkp.GenerateUniversalTask(schema.TaskType, schema.TaskData, schema.HardForkName, expectedVk, decryptionKey) | |
| if !ok { |
🤖 Prompt for AI Agents
In coordinator/internal/logic/provertask/prover_task.go around lines 209 to 215,
after calling hex.DecodeString on the sequencer decryption key, the code does
not validate the length of the decoded bytes. This can cause a panic in the Rust
FFI when libzkp.GenerateUniversalTask receives an invalid key length since it
asserts len == 32. Add a validation check after hex.DecodeString to ensure the
decoded key is exactly 32 bytes in length (or 0 if that's a valid configuration
option). If the length is invalid, return an error before proceeding to the
libzkp.GenerateUniversalTask call to prevent FFI panics that would abort the
coordinator.
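A tiny runnable illustration of why the length check matters: hex.DecodeString returns whatever even-length payload it is given without complaint, so an undersized key would only surface later at the FFI boundary.

```go
package main

import (
	"encoding/hex"
	"fmt"
)

func main() {
	// A 32-hex-character string decodes to only 16 bytes; no error is returned,
	// so without an explicit length check this slice would reach libzkp as-is.
	key, err := hex.DecodeString("00112233445566778899aabbccddeeff")
	fmt.Println(len(key), err) // 16 <nil>

	if l := len(key); l > 0 && l != 32 {
		fmt.Printf("sequencer decryption key must be 32 bytes, got %d\n", l)
	}
}
```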
| if unmarshalErr := json.Unmarshal([]byte(proofParameter.Proof), &chunkProof); unmarshalErr != nil { | ||
| return unmarshalErr | ||
| } | ||
| log.Info("parse chunkproof", "key", chunkProof.MetaData.ChunkInfo.EncryptionKey) |
Security concern: Avoid logging encryption keys.
Logging the encryption key at Info level exposes sensitive cryptographic material in production logs. Encryption keys should remain confidential and never be logged, even for debugging purposes.
Consider removing this log statement entirely or, if needed for debugging, ensure it's:
- Only enabled in development environments
- Logged at a higher verbosity level that's disabled by default
- Redacted (e.g., log only a hash or truncated version)
Apply this diff to remove the logging:
- log.Info("parse chunkproof", "key", chunkProof.MetaData.ChunkInfo.EncryptionKey)📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| log.Info("parse chunkproof", "key", chunkProof.MetaData.ChunkInfo.EncryptionKey) |
🤖 Prompt for AI Agents
In coordinator/internal/logic/submitproof/proof_receiver.go around line 222, the
Info log prints the encryption key
(chunkProof.MetaData.ChunkInfo.EncryptionKey), which exposes sensitive
cryptographic material; remove this log statement entirely or replace it with a
non-sensitive alternative (e.g., log a redacted value such as a truncated string
or a hash), and if kept for temporary debugging ensure it is gated behind a
development-only or debug-level flag that is disabled in production.
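If a breadcrumb is still useful for debugging, one option is to log a short digest instead of the key itself. A hedged sketch using only the standard library (the project's log package is not assumed):

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// keyFingerprint returns a short, non-reversible identifier for a key so logs
// can correlate proofs without leaking the key material itself.
func keyFingerprint(key []byte) string {
	if len(key) == 0 {
		return "empty"
	}
	sum := sha256.Sum256(key)
	return hex.EncodeToString(sum[:4]) // first 4 bytes of the digest
}

func main() {
	encryptionKey := []byte("example-key-material-only-for-demo")
	// Instead of logging the raw key, log a fingerprint:
	fmt.Println("parse chunkproof", "keyFingerprint", keyFingerprint(encryptionKey))
}
```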
| ver := cfg.Version | ||
| if ver == 0 { | ||
| var err error | ||
| ver, err = utils.Version(cfg.ForkName, validiumMode) | ||
| if err != nil { | ||
| panic(err) | ||
| } | ||
| } | ||
|
|
||
| return &rustCircuitConfig{ | ||
| ForkName: cfg.ForkName, | ||
| Version: uint(ver), | ||
| AssetsPath: cfg.AssetsPath, |
Avoid panicking on version lookup failure
If utils.Version cannot resolve cfg.ForkName (e.g. a mis-typed fork or stale config), this panic will tear down the coordinator during verifier init. Please propagate the error back to NewVerifier so callers can surface a clean startup failure instead of crashing the process. This means having newRustCircuitConfig return (*rustCircuitConfig, error) and bubbling the error through newRustVerifierConfig/NewVerifier.
-func newRustCircuitConfig(cfg config.AssetConfig) *rustCircuitConfig {
+func newRustCircuitConfig(cfg config.AssetConfig) (*rustCircuitConfig, error) {
ver := cfg.Version
if ver == 0 {
var err error
ver, err = utils.Version(cfg.ForkName, validiumMode)
if err != nil {
- panic(err)
+ return nil, fmt.Errorf("resolve circuit version for fork %q: %w", cfg.ForkName, err)
}
}
- return &rustCircuitConfig{
+ return &rustCircuitConfig{
Version: uint(ver),
AssetsPath: cfg.AssetsPath,
ForkName: cfg.ForkName,
- }
+ }, nil
}Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In coordinator/internal/logic/verifier/verifier.go around lines 37 to 48, the
helper currently panics when utils.Version fails; change newRustCircuitConfig to
return (*rustCircuitConfig, error) instead of panicking: if cfg.Version is 0
call utils.Version and on error return nil and the error (wrap with context like
"resolve fork version"), otherwise set Version to uint(ver) and return the
config with nil error. Update newRustVerifierConfig and NewVerifier signatures
to propagate the error (returning error to the caller) and update all call sites
to handle and surface startup failures cleanly instead of allowing a panic.
| type daBatchValidiumV1 struct { | ||
| Version CodecVersion `json:"version"` | ||
| BatchIndex uint64 `json:"batch_index"` | ||
| BlobVersionedHash common.Hash `json:"blob_versioned_hash"` | ||
| ParentBatchHash common.Hash `json:"parent_batch_hash"` | ||
| PostStateRoot common.Hash `json:"post_state_root"` | ||
| WithDrawRoot common.Hash `json:"withdraw_root"` | ||
| Commitment common.Hash `json:"commitment"` | ||
| } |
Populate BlobVersionedHash and fix encoded length
daBatchValidiumV1 declares BlobVersionedHash, but decodeDABatchV1 never reads those 32 bytes—the constant daBatchValidiumEncodedLength only reserves room for four hashes. As a result, every decoded batch carries a zero blob hash, so any consumer relying on that commitment will break (e.g. prover tasks assembling DA metadata). Please decode the missing segment and update the length guard so we reject malformed payloads instead of silently zeroing the field.
-const (
- daBatchValidiumEncodedLength = 137
-)
+const (
+ daBatchValidiumEncodedLength = 169
+)
@@
- indexOffset := versionOffset + versionSize
- parentHashOffset := indexOffset + indexSize
+ indexOffset := versionOffset + versionSize
+ blobHashOffset := indexOffset + indexSize
+ parentHashOffset := blobHashOffset + hashSize
stateRootOffset := parentHashOffset + hashSize
withdrawRootOffset := stateRootOffset + hashSize
commitmentOffset := withdrawRootOffset + hashSize
@@
- parentBatchHash := common.BytesToHash(data[parentHashOffset : parentHashOffset+hashSize])
+ blobVersionedHash := common.BytesToHash(data[blobHashOffset : blobHashOffset+hashSize])
+ parentBatchHash := common.BytesToHash(data[parentHashOffset : parentHashOffset+hashSize])
@@
return &daBatchValidiumV1{
- Version: version,
- BatchIndex: batchIndex,
- ParentBatchHash: parentBatchHash,
- PostStateRoot: postStateRoot,
- WithDrawRoot: withdrawRoot,
- Commitment: commitment,
+ Version: version,
+ BatchIndex: batchIndex,
+ BlobVersionedHash: blobVersionedHash,
+ ParentBatchHash: parentBatchHash,
+ PostStateRoot: postStateRoot,
+ WithDrawRoot: withdrawRoot,
+ Commitment: commitment,
}, nil
}Also update the error message to mention the validium codec so diagnostics stay accurate.
Also applies to: 58-92
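For the record, the length arithmetic behind the suggested constant change, assuming the field sizes implied by the struct above (1-byte version, 8-byte batch index, 32-byte hashes):

```go
package main

import "fmt"

func main() {
	const (
		versionSize = 1
		indexSize   = 8
		hashSize    = 32
	)
	// Current layout skips BlobVersionedHash: version + index + 4 hashes.
	withoutBlobHash := versionSize + indexSize + 4*hashSize
	// Decoding BlobVersionedHash too means 5 hashes.
	withBlobHash := versionSize + indexSize + 5*hashSize
	fmt.Println(withoutBlobHash, withBlobHash) // 137 169
}
```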
| let decryption_key = if decryption_key_len > 0 { | ||
| assert_eq!(decryption_key_len, 32, "len(decryption_key) != 32"); | ||
| Some(std::slice::from_raw_parts( | ||
| decryption_key, | ||
| decryption_key_len, | ||
| )) | ||
| } else { | ||
| None | ||
| }; | ||
| match libzkp::checkout_chunk_task(pre_task_str, decryption_key, cli) { |
Do not panic across the FFI boundary on bad key length
This assert_eq! will panic whenever the caller hands us an unexpected key size. A panic inside an extern "C" function is undefined behaviour and typically aborts the entire process, so a misconfigured coordinator key would crash everything. Please validate the length and return failed_handling_result() (with an error log) instead of asserting.
- let decryption_key = if decryption_key_len > 0 {
- assert_eq!(decryption_key_len, 32, "len(decryption_key) != 32");
- Some(std::slice::from_raw_parts(
- decryption_key,
- decryption_key_len,
- ))
- } else {
- None
- };
+ let decryption_key = if decryption_key_len > 0 {
+ if decryption_key_len != 32 {
+ tracing::error!(
+ "gen_universal_task received {}-byte decryption key; expected 32",
+ decryption_key_len
+ );
+ return failed_handling_result();
+ }
+ Some(std::slice::from_raw_parts(
+ decryption_key,
+ decryption_key_len,
+ ))
+ } else {
+ None
+ };📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| let decryption_key = if decryption_key_len > 0 { | |
| assert_eq!(decryption_key_len, 32, "len(decryption_key) != 32"); | |
| Some(std::slice::from_raw_parts( | |
| decryption_key, | |
| decryption_key_len, | |
| )) | |
| } else { | |
| None | |
| }; | |
| match libzkp::checkout_chunk_task(pre_task_str, decryption_key, cli) { | |
| let decryption_key = if decryption_key_len > 0 { | |
| if decryption_key_len != 32 { | |
| tracing::error!( | |
| "gen_universal_task received {}-byte decryption key; expected 32", | |
| decryption_key_len | |
| ); | |
| return failed_handling_result(); | |
| } | |
| Some(std::slice::from_raw_parts( | |
| decryption_key, | |
| decryption_key_len, | |
| )) | |
| } else { | |
| None | |
| }; | |
| match libzkp::checkout_chunk_task(pre_task_str, decryption_key, cli) { |
🤖 Prompt for AI Agents
In crates/libzkp_c/src/lib.rs around lines 161 to 170, the
assert_eq!(decryption_key_len, 32, ...) must be removed because panicking across
the FFI boundary is UB; instead, if decryption_key_len > 0 and != 32 log an
error and immediately return failed_handling_result() (do not panic), otherwise
construct the decryption_key slice when length == 32 as you already do; ensure
the error path uses the same logging mechanism used elsewhere so callers get a
proper failure return instead of an abort.
| github.com/crate-crypto/go-eth-kzg v1.4.0 // indirect | ||
| github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc // indirect | ||
| github.com/deckarep/golang-set v0.0.0-20180603214616-504e848d77ea // indirect | ||
| github.com/decred/dcrd/dcrec/secp256k1/v4 v4.3.0 // indirect |
🧩 Analysis chain
Verify the beta version status of ecies-go/v2.
The secp256k1 dependency (line 55) is appropriate for cryptographic operations. However, the ecies-go/v2 dependency (line 114) uses a beta version (v2.0.10-beta.1). While this aligns with the PR's encryption/decryption features, please verify that:
- This beta version is intentional and stable enough for production use
- There is no stable release available as an alternative
- The maintainers have committed to supporting this version
Also applies to: 114-114
🌐 Web query:
Latest stable version of scroll-tech/ecies-go package available on GitHub
💡 Result:
The latest stable tag is v2.0.10. [1][2]
Sources:
[1] GitHub tags for scroll-tech/ecies-go (v2.0.10 tag).
[2] scroll-tech/ecies-go repository README.
Update ecies-go/v2 to stable version v2.0.10.
The PR uses v2.0.10-beta.1, but the latest stable version is v2.0.10. Update line 114 to use the stable release unless there's a specific reason to use the beta version.
🤖 Prompt for AI Agents
In rollup/go.mod around line 55, the dependency ecies-go/v2 is pinned to a beta
release; update its version from v2.0.10-beta.1 to the stable v2.0.10 (replace
the existing require/replace entry for github.com/ecies/go/v2 with the stable
tag) and run `go mod tidy` to ensure the module file and sums are consistent.
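For reference, the fix is a one-line version bump followed by a module tidy. A minimal sketch, assuming the module path github.com/ecies/go/v2 used in the prompt above (the actual entry may be routed through a replace directive to the scroll-tech fork):

// rollup/go.mod — pin the stable tag instead of the beta pre-release
require github.com/ecies/go/v2 v2.0.10

Running `go mod tidy` afterwards keeps go.mod and go.sum consistent with the updated requirement.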
if count > 0 {
    log.Info("Fetching encrypted messages in validium mode")
    txs, err = w.GetL1MessagesInBlock(context.Background(), block.Hash(), eth.QueryModeSynced)
    if err != nil {
        return fmt.Errorf("failed to get L1 messages: %v, block hash: %v", err, block.Hash().Hex())
    }
}
Fix missing validium RPC helpers before merging.
Neither L2WatcherClient nor the embedded ethclient.Client exposes a GetL1MessagesInBlock method, and eth.QueryModeSynced is undefined, so the Docker job already fails to compile. Please wire up an existing RPC helper (or implement one; a hedged sketch follows the CI errors below) and drop or replace the missing constant before this can merge.
🧰 Tools
🪛 GitHub Actions: Docker
[error] 120-120: Method 'GetL1MessagesInBlock' is undefined for type '*L2WatcherClient'. This indicates missing implementation or dependency issue in the code.
[error] 120-120: Undefined symbol 'eth.QueryModeSynced'. Missing import, dependency, or definition in the codebase.
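If no existing helper fits, one possible shape is a thin wrapper over the node's RPC client, sketched below under stated assumptions: the RPC method name scroll_getL1MessagesInBlock and the result encoding are placeholders for whatever the sequencer actually serves, and the query-mode argument is dropped instead of relying on the undefined eth.QueryModeSynced.

package watcher

import (
    "context"
    "fmt"

    "github.com/scroll-tech/go-ethereum/common"
    "github.com/scroll-tech/go-ethereum/core/types"
    "github.com/scroll-tech/go-ethereum/rpc"
)

// getL1MessagesInBlock is a hypothetical helper: it asks the connected node
// for the L1 message transactions included in the given block. The RPC method
// name and result type are assumptions and must match what the node exposes.
func getL1MessagesInBlock(ctx context.Context, c *rpc.Client, blockHash common.Hash) ([]*types.Transaction, error) {
    var txs []*types.Transaction
    if err := c.CallContext(ctx, &txs, "scroll_getL1MessagesInBlock", blockHash); err != nil {
        return nil, fmt.Errorf("scroll_getL1MessagesInBlock failed for block %s: %w", blockHash.Hex(), err)
    }
    return txs, nil
}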
for ii := 0; ii < count; ii++ {
    // sanity check
    if blockTxs[ii].AsL1MessageTx().QueueIndex != txs[ii].AsL1MessageTx().QueueIndex {
        return fmt.Errorf("L1 message queue index mismatch at index %d: expected %d, got %d", ii, blockTxs[ii].AsL1MessageTx().QueueIndex, txs[ii].AsL1MessageTx().QueueIndex)
    }

    log.Info("Replacing L1 message tx in validium mode", "index", ii, "queueIndex", txs[ii].AsL1MessageTx().QueueIndex, "decryptedTxHash", blockTxs[ii].Hash().Hex(), "originalTxHash", txs[ii].Hash().Hex())
    blockTxs[ii] = txs[ii]
}
Don’t assume L1 message txs occupy the first slots.
count only tracks how many L1-message txs are in the block. If a user tx precedes any of them, blockTxs[0] is not an L1 message, so calling AsL1MessageTx() will panic (or we’ll overwrite the wrong transaction). Collect the actual indices of IsL1MessageTx() hits and replace those positions instead to avoid crashes on perfectly valid blocks.
Apply this diff to make the replacement index-aware:
- for ii := 0; ii < count; ii++ {
- // sanity check
- if blockTxs[ii].AsL1MessageTx().QueueIndex != txs[ii].AsL1MessageTx().QueueIndex {
- return fmt.Errorf("L1 message queue index mismatch at index %d: expected %d, got %d", ii, blockTxs[ii].AsL1MessageTx().QueueIndex, txs[ii].AsL1MessageTx().QueueIndex)
- }
-
- log.Info("Replacing L1 message tx in validium mode", "index", ii, "queueIndex", txs[ii].AsL1MessageTx().QueueIndex, "decryptedTxHash", blockTxs[ii].Hash().Hex(), "originalTxHash", txs[ii].Hash().Hex())
- blockTxs[ii] = txs[ii]
- }
+ l1Indices := make([]int, 0, count)
+ for idx, tx := range blockTxs {
+ if tx.IsL1MessageTx() {
+ l1Indices = append(l1Indices, idx)
+ }
+ }
+ if len(l1Indices) != len(txs) {
+ return fmt.Errorf("L1 message count mismatch: expected %d, got %d", len(l1Indices), len(txs))
+ }
+ for i, idx := range l1Indices {
+ if blockTxs[idx].AsL1MessageTx().QueueIndex != txs[i].AsL1MessageTx().QueueIndex {
+ return fmt.Errorf("L1 message queue index mismatch at position %d: expected %d, got %d", idx, blockTxs[idx].AsL1MessageTx().QueueIndex, txs[i].AsL1MessageTx().QueueIndex)
+ }
+ log.Info("Replacing L1 message tx in validium mode", "position", idx, "queueIndex", txs[i].AsL1MessageTx().QueueIndex, "decryptedTxHash", blockTxs[idx].Hash().Hex(), "originalTxHash", txs[i].Hash().Hex())
+ blockTxs[idx] = txs[i]
+ }

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
l1Indices := make([]int, 0, count)
for idx, tx := range blockTxs {
    if tx.IsL1MessageTx() {
        l1Indices = append(l1Indices, idx)
    }
}
if len(l1Indices) != len(txs) {
    return fmt.Errorf("L1 message count mismatch: expected %d, got %d", len(l1Indices), len(txs))
}
for i, idx := range l1Indices {
    if blockTxs[idx].AsL1MessageTx().QueueIndex != txs[i].AsL1MessageTx().QueueIndex {
        return fmt.Errorf("L1 message queue index mismatch at position %d: expected %d, got %d", idx, blockTxs[idx].AsL1MessageTx().QueueIndex, txs[i].AsL1MessageTx().QueueIndex)
    }
    log.Info("Replacing L1 message tx in validium mode", "position", idx, "queueIndex", txs[i].AsL1MessageTx().QueueIndex, "decryptedTxHash", blockTxs[idx].Hash().Hex(), "originalTxHash", txs[i].Hash().Hex())
    blockTxs[idx] = txs[i]
}
🤖 Prompt for AI Agents
In rollup/internal/controller/watcher/l2_watcher.go around lines 131 to 139, the
code assumes the first count transactions are L1-message txs and calls
AsL1MessageTx() on them; this can panic or overwrite wrong entries if user txs
precede L1-message txs. Fix by first scanning blockTxs to collect the actual
indices where IsL1MessageTx() is true (until you have collected count indices),
then iterate over those indices to perform the queueIndex sanity check, log and
replace blockTxs[index] = txs[correspondingIndex]. Ensure you map the
replacement txs in the same order they were collected so you replace the correct
positions.
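To make the ordering pitfall concrete, here is a self-contained sketch of the index-aware replacement on a deliberately simplified transaction model (the tx struct and field names below are illustrative, not the go-ethereum types): a user tx sitting before the L1 messages no longer shifts the replacement targets.

package main

import "fmt"

// tx is a toy stand-in for a block transaction: only the fields needed to
// demonstrate index-aware replacement are modelled.
type tx struct {
    isL1Msg    bool
    queueIndex uint64
    payload    string
}

// replaceL1Messages swaps each L1-message tx in blockTxs with the matching
// entry from replacements, keyed by position among L1 messages only.
func replaceL1Messages(blockTxs, replacements []tx) error {
    l1Indices := make([]int, 0, len(replacements))
    for idx, t := range blockTxs {
        if t.isL1Msg {
            l1Indices = append(l1Indices, idx)
        }
    }
    if len(l1Indices) != len(replacements) {
        return fmt.Errorf("L1 message count mismatch: expected %d, got %d", len(l1Indices), len(replacements))
    }
    for i, idx := range l1Indices {
        if blockTxs[idx].queueIndex != replacements[i].queueIndex {
            return fmt.Errorf("queue index mismatch at position %d: expected %d, got %d", idx, blockTxs[idx].queueIndex, replacements[i].queueIndex)
        }
        blockTxs[idx] = replacements[i]
    }
    return nil
}

func main() {
    // A user tx occupies slot 0, so a naive blockTxs[0..count) replacement would misfire.
    block := []tx{
        {isL1Msg: false, payload: "user tx"},
        {isL1Msg: true, queueIndex: 7, payload: "decrypted L1 msg"},
        {isL1Msg: true, queueIndex: 8, payload: "decrypted L1 msg"},
    }
    encrypted := []tx{
        {isL1Msg: true, queueIndex: 7, payload: "encrypted L1 msg"},
        {isL1Msg: true, queueIndex: 8, payload: "encrypted L1 msg"},
    }
    if err := replaceL1Messages(block, encrypted); err != nil {
        panic(err)
    }
    fmt.Println(block[0].payload, "/", block[1].payload) // user tx / encrypted L1 msg
}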
e2e tasks for compatibility testing:
All tests should be proven by a v4.6.1 prover |
Purpose or design rationale of this PR
Describe your change. Make sure to answer these three questions: What does this PR do? Why does it do it? How does it do it?
PR title
Your PR title must follow conventional commits (as we are doing squash merge for each PR), so it must start with one of the following types:
Deployment tag versioning
Has `tag` in `common/version.go` been updated or have you added the `bump-version` label to this PR?
Breaking change label
Does this PR have the `breaking-change` label?
Summary by CodeRabbit
New Features
Chores